4 research outputs found

    Deep learning framework for subject-independent emotion detection using wireless signals.

    Emotion state recognition using wireless signals is an emerging research area with implications for neuroscientific studies of human behaviour and for well-being monitoring. Currently, standoff emotion detection relies mostly on the analysis of facial expressions and/or eye movements captured by optical or video cameras. Meanwhile, although machine learning approaches are widely accepted for recognising human emotions from multimodal data, they have mostly been restricted to subject-dependent analyses, which lack generality. In this paper, we report an experimental study that collects heartbeat and breathing signals of 15 participants from radio frequency (RF) reflections off the body, followed by novel noise filtering techniques. We propose a novel deep neural network (DNN) architecture based on the fusion of raw RF data and the processed RF signal for classifying and visualising various emotion states. The proposed model achieves a high classification accuracy of 71.67% for independent subjects, with precision, recall and F1-score values of 0.71, 0.72 and 0.71 respectively. We compare our results with those obtained from five classical ML algorithms and establish that deep learning offers superior performance even with a limited amount of raw RF and post-processed time-sequence data. The deep learning model is also validated by comparing our results with those from ECG signals. Our results indicate that using wireless signals for standoff emotion state detection is a better alternative to other technologies, with high accuracy and much wider applications in future studies of the behavioural sciences.
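The per-class precision, recall and F1 figures reported above are conventionally averaged over the emotion classes. A minimal sketch of macro-averaged metrics computed from true and predicted labels (the function name and toy labels below are illustrative, not from the paper):

```python
def macro_prf(y_true, y_pred):
    """Macro-averaged precision, recall and F1 over all classes."""
    classes = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        # per-class true positives, false positives and false negatives
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    # macro average: unweighted mean over classes
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

Micro- or weighted averaging would instead weight classes by support; the paper's 0.71/0.72/0.71 figures are consistent with a per-class average of this kind.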

    Microwave characterization of two Ba0.6Sr0.4TiO3 dielectric thin films with out-of-plane and in-plane electrode structures

    Ferroelectric (FE) thin films have recently attracted renewed research interest due to their great potential for designing novel tunable electromagnetic devices such as large intelligent surfaces (LISs). However, the mechanism by which a polar structure in the FE thin films contributes to the desired tunable performance, especially within the microwave frequency range, the most widely used frequency range in electromagnetics, has not been clearly illustrated. In this paper, we describe several straightforward and cost-effective methods to fabricate and characterize Ba0.6Sr0.4TiO3 (BST) thin films at microwave frequencies. The prepared BST thin films exhibit homogeneous structures and great tunability (η) over a wide frequency and temperature range when the applied field is in the out-of-plane direction. The high tunability can be attributed to a high concentration of polar nanoclusters. Their response to the applied direct current (DC) field was directly visualized using a novel non-destructive near-field scanning microwave microscopy (NSMM) technique. Our results provide intriguing insights into the application of FE thin films in future programmable high-frequency devices and systems.
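The tunability η referred to above is conventionally defined as the fractional drop in capacitance (or, equivalently, permittivity) when a DC bias field is applied. A minimal sketch using that standard definition (the numerical values in the usage note are illustrative, not measurements from the paper):

```python
def tunability(c_zero_bias, c_biased):
    """Dielectric tunability eta = (C(0) - C(E)) / C(0):
    the fractional decrease in capacitance (or permittivity)
    of the FE film under an applied DC bias field E."""
    return (c_zero_bias - c_biased) / c_zero_bias
```

For example, a film whose capacitance falls from 10 pF at zero bias to 7 pF under bias would have η = 0.3 (30% tunability).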

    Compressive Sensing Radar Imaging with Convolutional Neural Networks

    In the area of radar imaging at any frequency band from microwave to optics, the technique of compressive sensing (CS) enables high resolution with a reduced number of antenna elements and measurements. However, CS methods suffer from high computational complexity and require parameter tuning to ensure good image reconstruction under different noise, sparsity and undersampling levels. To alleviate these issues, we present a machine learning approach that combines CS and a convolutional neural network (CNN) for radar imaging. This CS-based CNN (CS-CNN) method retains the good characteristics of CS methods, such as sparse sampling and high resolving power, but is free from time-consuming numerical optimization and demanding data-storage requirements. At the same time, it is robust to environmental changes such as noise, target sparsity and sampling rate. We have conducted extensive computer simulations for both qualitative and quantitative evaluation. Finally, we experimentally validate the technique with a demonstration of stable high-resolution imaging using a sparse multiple-input multiple-output (MIMO) array, where traditional imaging methods suffer from serious grating lobes. This approach is generic and can easily be extended to other applications of electromagnetic imaging and sensing.
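For context, the classical CS reconstruction that a learned method like CS-CNN replaces is typically an iterative l1-regularised solver such as ISTA, whose step size and regularisation weight are exactly the parameters that need tuning. A dependency-free sketch of ISTA on a toy undersampled problem (the matrix, step-size bound and λ are illustrative assumptions, not the paper's settings):

```python
def soft(v, t):
    """Soft-thresholding operator: the proximal map of the l1 norm."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(A, y, lam=0.05, step=None, iters=500):
    """Iterative soft-thresholding for min (1/2)||Ax - y||^2 + lam*||x||_1,
    with A given as a list of rows (m measurements, n unknowns, m < n)."""
    m, n = len(A), len(A[0])
    if step is None:
        # crude but safe step: 1 / ||A||_F^2 never exceeds 1 / L
        step = 1.0 / sum(a * a for row in A for a in row)
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - y and gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by the sparsity-promoting shrinkage
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

Each call runs hundreds of iterations per image and its output quality depends on λ; a trained CNN performs the reconstruction in a single forward pass, which is the computational advantage claimed above.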

    Deep learning for behaviour classification in a preclinical brain injury model.

    The early detection of traumatic brain injuries can directly impact the prognosis and survival of patients. Previous attempts to automate the detection and severity assessment of traumatic brain injury continue to rely on clinical diagnostic methods, with limited tools for predicting disease outcomes in large populations. Despite advances in machine and deep learning tools, current approaches still rely on simple statistical trends, which lack generality. The effectiveness of deep learning in extracting information from large subsets of data can be further emphasised through the use of more elaborate architectures. We therefore explore the use of a multiple-input architecture integrating a convolutional neural network (CNN) and long short-term memory (LSTM) for traumatic injury detection, predicting the presence of brain injury in a murine preclinical model dataset. We investigated the effectiveness and validity of traumatic brain injury detection in the proposed model against various other machine learning algorithms, such as the support vector machine, the random forest classifier and the feedforward neural network. Our dataset was acquired using a home cage automated (HCA) system to assess the individual behaviour of mice with traumatic brain injury, or non-central nervous system (non-CNS) injured controls, whilst housed in their cages. Their distance travelled, body temperature, separation from other mice and movement were recorded every 15 minutes, for 72 hours weekly, for 5 weeks following intervention. The HCA behavioural data were used to train a deep learning model, which then predicts whether the animals were subjected to a brain injury or only a sham intervention without brain damage. We also explored and evaluated different ways to handle the class imbalance present in the uninjured class of our training data. We then evaluated our models with leave-one-out cross validation. Our proposed deep learning model achieved the best performance and shows promise in its capability to detect the presence of brain trauma in mice.
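The leave-one-out cross validation used above can be sketched generically: train on all subjects but one, test on the held-out subject, and average accuracy across folds. In this illustrative sketch a toy nearest-class-mean classifier stands in for the deep model (all function names and data are hypothetical, not from the study):

```python
def leave_one_out(samples, labels, fit, predict):
    """Leave-one-out cross validation: each sample is held out once,
    a model is fitted on the rest, and overall accuracy is returned."""
    correct = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = fit(train_x, train_y)
        correct += predict(model, samples[i]) == labels[i]
    return correct / len(samples)

def fit_nearest_mean(xs, ys):
    """Toy stand-in for the deep model: store per-class feature means."""
    means = {}
    for c in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == c]
        means[c] = [sum(col) / len(pts) for col in zip(*pts)]
    return means

def predict_nearest_mean(means, x):
    """Assign the class whose mean is closest in squared distance."""
    return min(means, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, means[c])))
```

Because every fold holds out a whole animal, the accuracy estimates how the model generalises to unseen subjects rather than to unseen time windows from already-seen animals.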